Results 1 - 6 of 6
1.
arxiv; 2024.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2402.14203v1

ABSTRACT

The COVID-19 pandemic led to a worldwide health crisis that was accompanied by an infodemic. A group of 12 social media personalities, dubbed the "Disinformation Dozen", were identified as key spreaders of disinformation regarding the COVID-19 virus, treatments, and vaccines. This study focuses on the spread of disinformation propagated by this group on Telegram, a mobile messaging and social media platform. After segregating users into three groups -- the Disinformation Dozen, bots, and humans -- we perform an investigation on a dataset of Telegram messages from January to June 2023, comparatively analyzing temporal, topical, and network features. We observe that the Disinformation Dozen are highly involved in the initial dissemination of disinformation but are not the main drivers of its propagation. Bot users are extremely active in conversation threads, while human users are active propagators of information, disseminating posts between Telegram channels through the forwarding mechanism.


Subject(s)
COVID-19
2.
arxiv; 2024.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2401.06582v1

ABSTRACT

Social media platforms are a key ground of information consumption and dissemination. Key figures such as politicians, celebrities, and activists have leveraged their wide user bases for strategic communication. Strategic communication, or StratCom, is the deliberate act of information creation and distribution. These key figures use its techniques to establish their brand and amplify their messages, employing automated scripts on top of personal posting to perform these tasks quickly and effectively. The combination of automation and manual online posting creates a Cyborg social media profile, a hybrid between bot and human. In this study, we establish a quantitative definition of a Cyborg account: an account that is detected as a bot in one time window and identified as a human in another. This definition uses frequent changes of bot classification labels and large differences in bot likelihood scores to identify Cyborgs. We perform a large-scale analysis of over 3.1 million Twitter users collected from two key events, the 2020 Coronavirus pandemic and the 2020 US Elections. We extract Cyborgs from the two datasets and employ tools from network science, natural language processing, and manual annotation to characterize Cyborg accounts. Our analyses identify that Cyborg accounts are mostly constructed for strategic communication uses, have a strong duality in their bot/human classification, and are tactically positioned in the social media network, helping these accounts promote their desired content. Cyborgs are also found to have long online lives, indicating either their ability to evade bot detectors or the willingness of platforms to allow their operations.
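The quantitative Cyborg definition in this abstract (a bot label in one time window, a human label in another, combined with a large spread in bot-likelihood scores) can be sketched as follows. The function name, the 0.5 label cut-off, and the 0.3 score-gap threshold are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of a Cyborg check over per-window bot-likelihood scores.
# BOT_THRESHOLD and SCORE_GAP are assumed placeholder values.

BOT_THRESHOLD = 0.5  # assumed cut-off between "human" and "bot" labels
SCORE_GAP = 0.3      # assumed minimum for a "large difference" in scores

def is_cyborg(window_scores):
    """window_scores: bot-likelihood scores, one per time window."""
    labels = [s >= BOT_THRESHOLD for s in window_scores]
    label_flips = len(set(labels)) > 1  # bot in one window, human in another
    wide_gap = max(window_scores) - min(window_scores) >= SCORE_GAP
    return label_flips and wide_gap

# Classified human (0.2) in one window, bot (0.8) in another:
print(is_cyborg([0.2, 0.8, 0.7]))   # True
print(is_cyborg([0.1, 0.15, 0.2]))  # False: consistently human
```

Requiring both conditions keeps an account whose score hovers near the cut-off, flipping labels by noise alone, from being counted as a Cyborg.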

3.
IEEE Transactions on Network Science and Engineering ; 10(1):3-19, 2023.
Article in English | ProQuest Central | ID: covidwho-2192115

ABSTRACT

Social influence characterizes the change of an individual's stance towards a topic in a complex social environment. Two factors often govern the influence of stances in an online social network: endogenous influences driven by an individual's innate beliefs, reflected in the agent's past stances, and exogenous influences formed by social network influence between users. Both endogenous and exogenous influences offer important cues to user susceptibility, thereby enhancing the predictive performance on stance changes, or flipping. In this work, we propose a stance flipping prediction problem to identify Twitter agents that are susceptible to flipping their stance towards the coronavirus vaccine (i.e., from pro-vaccine to anti-vaccine). Specifically, we design a social influence model in which each agent has a fixed innate stance and a conviction in that stance that reflects its resistance to change; agents influence each other through the social network structure. On data collected between April 2020 and May 2021, our model achieves 86% accuracy in predicting agents that flip stances. Further analysis identifies that agents that flip stances have significantly more neighbors engaging in collective expression of the opposite stance; 53.7% of the agents that flip stances are bots, and bot agents require less social influence to flip stances.
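A minimal sketch in the spirit of the model this abstract describes: an agent holds an innate stance and a conviction (resistance to change), while neighbors exert exogenous influence. The function name, the convex-combination update rule, and all values are illustrative assumptions, not the authors' exact model:

```python
# Hedged sketch: one influence update blending innate and neighbor stances.

def updated_stance(innate, conviction, neighbor_stances):
    """Blend an agent's innate stance with the mean stance of its neighbors.

    innate: the agent's fixed innate stance in [-1, 1] (e.g. +1 = pro-vaccine)
    conviction: weight in [0, 1] on the innate stance (resistance to change)
    neighbor_stances: stances of the agent's social-network neighbors
    """
    if not neighbor_stances:
        return innate  # no neighbors, no exogenous influence
    social = sum(neighbor_stances) / len(neighbor_stances)
    return conviction * innate + (1 - conviction) * social

# A low-conviction pro-vaccine agent surrounded by anti-vaccine neighbors
# flips negative (the paper notes bots need less influence to flip):
print(updated_stance(1.0, 0.2, [-1.0, -1.0, -1.0]))  # ≈ -0.6
```

Under this rule, a lower conviction means a smaller share of opposing neighbors suffices to push the agent's stance across zero, matching the finding that bot agents flip with less social influence.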

4.
arxiv; 2021.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2106.11076v2

ABSTRACT

Social influence characterizes the change of an individual's stance towards a topic in a complex social environment. Two factors often govern the influence of stances in an online social network: endogenous influences driven by an individual's innate beliefs, reflected in the agent's past stances, and exogenous influences formed by social network influence between users. Both endogenous and exogenous influences offer important cues to user susceptibility, thereby enhancing the predictive performance on stance changes, or flipping. In this work, we propose a stance flipping prediction problem to identify Twitter agents that are susceptible to flipping their stance towards the coronavirus vaccine (i.e., from pro-vaccine to anti-vaccine). Specifically, we design a social influence model in which each agent has a fixed innate stance and a conviction in that stance that reflects its resistance to change; agents influence each other through the social network structure. On data collected between April 2020 and May 2021, our model achieves 86% accuracy in predicting agents that flip stances. Further analysis identifies that agents that flip stances have significantly more neighbors engaging in collective expression of the opposite stance; 53.7% of the agents that flip stances are bots, and bot agents require less social influence to flip stances.

5.
arxiv; 2021.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2104.01215v1

ABSTRACT

The 2020 coronavirus pandemic heightened the need to flag coronavirus-related misinformation, and fact-checking groups have taken to verifying misinformation on the Internet. We explore stories reported by the fact-checking groups PolitiFact, Poynter, and Snopes from January to June 2020, characterising them into six story clusters, then analysing time-series trends, story validity trends, and the level of agreement across sites. We further break down the story clusters into more granular story types by proposing a unique automated method with a BERT classifier, which can be used to classify diverse story sources in both fact-checked stories and tweets.

6.
arxiv; 2020.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2010.10113v1

ABSTRACT

We analyse a Singapore-based COVID-19 Telegram group with more than 10,000 participants. First, we study the group's opinion over time, focusing on four dimensions: participation, sentiment, topics, and psychological features. We find that engagement peaked when the Ministry of Health raised the disease alert level, but this engagement was not sustained. Second, we search for government-identified misinformation in the group. We find that government-identified misinformation is rare, and that messages discussing these pieces of misinformation express skepticism.


Subject(s)
COVID-19